Results 1 - 10 of 10
1.
Biomedica ; 42(1): 170-183, 2022 03 01.
Article in English, Spanish | MEDLINE | ID: mdl-35471179

ABSTRACT

INTRODUCTION: The coronavirus disease 2019 (COVID-19) has become a significant public health problem worldwide. In this context, automatic CT-scan analysis has emerged as a complementary COVID-19 diagnosis tool, allowing for the characterization of radiological findings, patient categorization, and disease follow-up. However, this analysis depends on the radiologist's expertise, which may result in subjective evaluations. OBJECTIVE: To explore deep learning representations, trained from thoracic CT slices, to automatically distinguish COVID-19 disease from control samples. MATERIALS AND METHODS: Two datasets were used: SARS-CoV-2 CT Scan (Set-1) and the FOSCAL clinic's dataset (Set-2). The deep representations took advantage of supervised learning models previously trained on the natural-image domain, which were adjusted following a transfer learning scheme. The deep classification was carried out (a) via an end-to-end deep learning approach and (b) via random forest and support vector machine classifiers fed with the deep representation embedding vectors. RESULTS: The end-to-end classification achieved an average accuracy of 92.33% (89.70% precision) for Set-1 and 96.99% (96.62% precision) for Set-2. The deep feature embedding with a support vector machine achieved an average accuracy of 91.40% (95.77% precision) and 96.00% (94.74% precision) for Set-1 and Set-2, respectively. CONCLUSION: Deep representations achieved outstanding performance in the identification of COVID-19 cases on CT scans, demonstrating good characterization of the COVID-19 radiological patterns. These representations could potentially support COVID-19 diagnosis in clinical settings.
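As a concrete illustration of option (b), the sketch below fits a support vector machine on embedding vectors. This is not the authors' code: the deep embeddings are replaced by random stand-ins (the paper derives them from a network pretrained on natural images and adjusted by transfer learning), and all sizes are arbitrary.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-ins for deep embedding vectors of CT slices; in the paper these come
# from a CNN pretrained on natural images and fine-tuned on thoracic CT data.
n_per_class, dim = 100, 512
covid = rng.normal(loc=0.5, size=(n_per_class, dim))
control = rng.normal(loc=-0.5, size=(n_per_class, dim))
X = np.vstack([covid, control])
y = np.array([1] * n_per_class + [0] * n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# Feed the embedding vectors into an SVM classifier, as in option (b).
scaler = StandardScaler().fit(X_tr)
clf = SVC(kernel="rbf").fit(scaler.transform(X_tr), y_tr)
accuracy = clf.score(scaler.transform(X_te), y_te)
```

The same embedding matrix could equally be fed to a random forest, which is the other classifier the abstract mentions.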




Subjects
COVID-19 , Deep Learning , COVID-19 Testing , Humans , Neural Networks, Computer , SARS-CoV-2 , Tomography, X-Ray Computed
2.
Biomédica (Bogotá) ; 42(1): 170-183, ene.-mar. 2022. tab, graf
Article in English | LILACS | ID: biblio-1374516



Subjects
Coronavirus Infections/diagnosis , Deep Learning , Tomography, X-Ray Computed
3.
Biomed Eng Lett ; 12(1): 75-84, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35186361

ABSTRACT

Cardiac cine-MRI is one of the most important diagnostic tools used to assess the morphology and physiology of the heart during the cardiac cycle. Nonetheless, cardiac cine-MRI analysis remains poorly exploited and highly dependent on the observer's expertise. This work introduces an imaging cardiac disease representation, coded as an embedding vector, that fully exploits the hidden mapping between the latent space and a generated cine-MRI data distribution. The resultant representation is progressively learned and conditioned on a set of cardiac conditions. A generative cardiac descriptor is obtained from a progressive generative adversarial network trained to produce synthetic MRI images conditioned on several heart conditions. The generator model is then used to recover a digital biomarker, coded as an embedding vector, following a backpropagation scheme. A UMAP strategy is then applied to build a low-dimensional topological embedding space that discriminates among cardiac pathologies. The approach was evaluated by using the embedded representation as a potential disease descriptor on 2296 pathological cine-MRI slices. The proposed strategy yields an average accuracy of 0.8 in discriminating among heart conditions. Furthermore, the low-dimensional space shows a remarkable grouping of cardiac classes, suggesting its potential use as a tool to support diagnosis. The learned progressive generative representation of cine-MRI slices allows retrieving and coding complex descriptors that are useful for discriminating among heart conditions. The cardiac disease representation, expressed as a hidden embedding vector, could potentially support cardiac analysis of cine-MRI sequences.
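The descriptor-recovery step described above, finding an embedding by backpropagating a reconstruction error through a trained generator, can be sketched with a toy linear "generator". This is a stand-in, not the paper's progressive GAN: for a linear map the backpropagated gradient has a closed form, which makes the idea easy to show.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear "generator" standing in for the trained progressive GAN:
# it maps a latent code z to an image x = G(z).
latent_dim, image_dim = 8, 64
W = rng.normal(size=(image_dim, latent_dim))

def G(z):
    return W @ z

z_true = rng.normal(size=latent_dim)
x = G(z_true)  # "observed" cine-MRI slice

# Recover the embedding descriptor by minimizing the reconstruction error
# ||G(z) - x||^2 at the generator output via gradient descent -- the
# backpropagation scheme of the abstract, with a closed-form gradient here.
z = np.zeros(latent_dim)
lr = 0.5 / np.linalg.eigvalsh(W.T @ W).max()  # safe step size
for _ in range(2000):
    z -= lr * 2 * W.T @ (G(z) - x)
```

With a nonlinear generator the gradient would come from autodiff instead of the explicit `W.T @ (...)` term, but the loop is the same.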

4.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 5570-5573, 2021 11.
Article in English | MEDLINE | ID: mdl-34892386

ABSTRACT

Cardiac cine-MRI is one of the most important diagnostic tools for characterizing heart-related pathologies. This imaging technique allows clinicians to assess the morphology and physiology of the heart during the cardiac cycle. Nonetheless, the analysis of cardiac cine-MRI is highly dependent on the observer's expertise, and a high inter-reader variability is frequently observed. Alternatively, the ejection fraction, a quantitative measure of heart dynamics, is used to identify potential cardiac diseases. Unfortunately, this type of measurement is insufficient to distinguish among different cardiac pathologies, and it does not exploit all the functional information about the heart conveyed by cine-MRI sequences. Automatic image analysis might help to identify visual patterns associated with cardiac diseases in cine-MRI sequences and highlight potential biomarkers. This paper introduces a conditional generative adversarial network that learns a mapping between the latent space and a generated cine-MRI data distribution involving information from five different cardiac pathologies. The network is guided by the left-ventricle segmentation and the velocity field, computed as prior information, to focus the deep representation on salient cardiac patterns. Once the deep neural networks are trained, a set of validation cine-MRI slices is represented in the embedding space. The associated embedding descriptor in the latent space is found by minimizing a reconstruction error at the generator output. We evaluated the obtained embedded representation as a disease marker by using different classification models on 16,000 pathological cine-MRI slices. The representation retrieved with the best conditional generative model configuration yielded an average accuracy of 90.04% and an average F1-score of 89.97% in the classification task. Clinical relevance: construction of a topological embedding space, from a generative representation, that fully exploits hidden relationships in cine-MRI and represents cardiac diseases.
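The abstract does not specify how the velocity-field prior is computed; one crude stand-in is the normal-flow approximation from the brightness-constancy equation, Ix·u + Iy·v + It = 0, which gives (u, v) = -It·(Ix, Iy)/(Ix² + Iy²). The sketch below applies it to two synthetic frames; the image size, stripe pattern, and one-pixel shift are all invented for illustration.

```python
import numpy as np

# Two synthetic "cine frames": a horizontal stripe pattern, then the same
# pattern shifted down by one pixel (true motion is +1 in y everywhere).
H, W = 64, 64
yy, xx = np.mgrid[0:H, 0:W].astype(float)
frame0 = np.sin(yy / 6.0)
frame1 = np.sin((yy - 1.0) / 6.0)

# Normal-flow velocity field from the brightness-constancy equation.
Iy, Ix = np.gradient(frame0)     # spatial intensity gradients (row, col order)
It = frame1 - frame0             # temporal derivative
denom = Ix**2 + Iy**2 + 1e-8     # avoid division by zero in flat regions
u = -It * Ix / denom             # x-component of the normal flow
v = -It * Iy / denom             # y-component of the normal flow
```

The estimate is only reliable where the spatial gradient is non-negligible, which is why real pipelines use regularized optical-flow methods rather than this pointwise formula.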


Subjects
Heart Diseases , Magnetic Resonance Imaging, Cine , Heart/diagnostic imaging , Heart Diseases/diagnostic imaging , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging
5.
IEEE J Biomed Health Inform ; 24(12): 3456-3465, 2020 12.
Article in English | MEDLINE | ID: mdl-32750929

ABSTRACT

Neovascular age-related macular degeneration (nAMD) is nowadays successfully treated with anti-VEGF substances, but inter-individual treatment requirements are vastly heterogeneous and currently poorly plannable, resulting in suboptimal treatment frequency. Optical coherence tomography (OCT), with its 3D high-resolution imaging, serves as a companion diagnostic to anti-VEGF therapy. This creates a need for building predictive models using automated image analysis of OCT scans acquired during the treatment-initiation phase. We propose such a model based on a deep learning (DL) architecture, comprised of a densely connected neural network (DenseNet) and a recurrent neural network (RNN), trainable end-to-end. The method starts by sampling several 2D images from an OCT volume to obtain a lower-dimensional OCT representation. At the core of the predictive model, the DenseNet learns useful retinal spatial features while the RNN integrates information from different time points. The introduced model was evaluated on the prediction of anti-VEGF treatment requirements in nAMD patients treated under a pro-re-nata (PRN) regimen. The DL model was trained on 281 patients and evaluated on a hold-out test set of 69 patients. The predictive model achieved a concordance index of 0.7 in regressing the number of received treatments, while in a classification task it obtained an AUC of 0.85 (0.81) in detecting the patients with low (high) treatment requirements. The proposed model outperformed previous machine learning strategies that relied on a set of spatio-temporal image features, showing that the proposed DL architecture successfully learned to extract the relevant spatio-temporal patterns directly from raw longitudinal OCT images.
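The data flow of the DenseNet + RNN pipeline can be caricatured as below. Every shape, the slice-sampling rule, and the pooling step are invented, and the untrained stand-in networks only illustrate the structure: per-visit OCT volume → sampled 2D slices → spatial feature vector → recurrent integration over visits → regression head for the treatment count.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sizes: per-visit feature dimension, hidden state, and the
# number of visits in the treatment-initiation phase.
feat_dim, hid_dim, n_visits = 16, 8, 3

def spatial_features(volume):
    """Stand-in for the DenseNet: sample a few 2D slices and pool them."""
    slices = volume[::4]                                  # subsample slices
    return slices.reshape(len(slices), -1).mean(axis=0)[:feat_dim]

# Minimal (untrained) Elman RNN cell integrating information across visits.
W_x = rng.normal(scale=0.1, size=(hid_dim, feat_dim))
W_h = rng.normal(scale=0.1, size=(hid_dim, hid_dim))
w_out = rng.normal(scale=0.1, size=hid_dim)

h = np.zeros(hid_dim)
for _ in range(n_visits):
    volume = rng.normal(size=(16, 4, 4))   # toy OCT volume (slices, H, W)
    x = spatial_features(volume)
    h = np.tanh(W_x @ x + W_h @ h)         # recurrent state update

predicted_treatments = w_out @ h           # regression head (trained end-to-end in the paper)
```

In the actual model all three components would be trained jointly with a regression loss on the observed number of injections.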


Subjects
Deep Learning , Image Interpretation, Computer-Assisted/methods , Macular Degeneration/diagnostic imaging , Tomography, Optical Coherence/methods , Algorithms , Disease Progression , Humans , Macular Degeneration/pathology , Retina/diagnostic imaging
6.
Biomed Opt Express ; 11(1): 346-363, 2020 Jan 01.
Article in English | MEDLINE | ID: mdl-32010521

ABSTRACT

Diagnosis and treatment in ophthalmology depend on modern retinal imaging by optical coherence tomography (OCT). The recent staggering results of machine learning in medical imaging have inspired the development of automated segmentation methods to identify and quantify pathological features in OCT scans. These models need to be sensitive to image features defining patterns of interest, while remaining robust to differences in imaging protocols. A dominant factor for such image differences is the type of OCT acquisition device. In this paper, we analyze the ability of recently developed unsupervised unpaired image translations based on cycle consistency losses (cycleGANs) to deal with image variability across different OCT devices (Spectralis and Cirrus). This evaluation was performed on two clinically relevant segmentation tasks in retinal OCT imaging: fluid and photoreceptor layer segmentation. Additionally, a visual Turing test designed to assess the quality of the learned translation models was carried out by a group of 18 participants with different background expertise. Results show that the learned translation models improve the generalization ability of segmentation models to other OCT-vendors/domains not seen during training. Moreover, relationships between model hyper-parameters and the realism as well as the morphological consistency of the generated images could be identified.
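Cycle consistency, the core of the cycleGAN objective above, can be illustrated with toy translators: if the two generators are (approximately) inverse mappings between the Spectralis and Cirrus intensity domains, the L1 cycle loss is near zero. The affine intensity maps below are invented stand-ins for the learned generators.

```python
import numpy as np

# Invented stand-ins for the two cycleGAN generators:
# G maps Spectralis-style intensities to Cirrus-style, F maps back.
def G(x):  # Spectralis -> Cirrus (hypothetical affine intensity shift)
    return 1.2 * x + 0.1

def F(y):  # Cirrus -> Spectralis (exact inverse of G)
    return (y - 0.1) / 1.2

rng = np.random.default_rng(3)
x = rng.random((32, 32))   # unpaired Spectralis-like image
y = rng.random((32, 32))   # unpaired Cirrus-like image

# Cycle-consistency loss: translating there and back should reconstruct
# the input (L1 norm, as in the cycleGAN objective). Training pushes the
# generators toward exactly this property without any paired images.
loss_cycle = np.abs(F(G(x)) - x).mean() + np.abs(G(F(y)) - y).mean()
```

In the real model G and F are convolutional networks and the cycle loss is added to adversarial losses from two discriminators; here the inverse pair makes the loss vanish up to floating-point error.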

7.
Lab Invest ; 98(11): 1438-1448, 2018 11.
Article in English | MEDLINE | ID: mdl-29959421

ABSTRACT

Early-stage estrogen receptor-positive (ER+) breast cancer (BCa) is the most common type of BCa in the United States. One critical question with these tumors is identifying which patients will receive added benefit from adjuvant chemotherapy. Nuclear pleomorphism (variance in nuclear shape and morphology) is an important constituent of breast grading schemes, and in ER+ cases, the grade is highly correlated with disease outcome. This study aimed to investigate whether quantitative computer-extracted image features of nuclear shape and orientation on digitized images of hematoxylin-stained and eosin-stained tissue of lymph node-negative (LN-), ER+ BCa could help stratify patients into discrete (<10 years short-term vs. >10 years long-term survival) outcome groups independent of standard clinical and pathological parameters. We considered a tissue microarray (TMA) cohort of 276 ER+, LN- patients comprising 150 patients with long-term and 126 patients with short-term overall survival, wherein 177 randomly chosen cases formed the modeling set and the 99 remaining cases the test set. Segmentation of individual nuclei was performed using multiresolution watershed; subsequently, 615 features relating to nuclear shape/texture and orientation disorder were extracted from each TMA spot. The Wilcoxon rank-sum test identified the 15 most prognostic quantitative histomorphometric features within the modeling set. These features were then combined via a linear discriminant analysis classifier and evaluated on the test set to assign a probability of long-term vs. short-term disease-specific survival. In univariate survival analysis, patients identified by the image classifier as high risk had significantly poorer survival outcome: hazard ratio (95% confidence interval) = 2.91 (1.23-6.92), p = 0.02786. Multivariate analysis controlling for T-stage, histology grade, and nuclear grade showed the classifier to be independently predictive of poorer survival: hazard ratio (95% confidence interval) = 3.17 (0.33-30.46), p = 0.01039. Our results suggest that quantitative histomorphometric features of nuclear shape and orientation are strongly and independently predictive of patient survival in ER+, LN- BCa.
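The feature-selection and classification pipeline above (Wilcoxon rank-sum ranking followed by a linear discriminant classifier) can be sketched as follows. The data are synthetic stand-ins for the 615 nuclear shape/orientation features; only the first 20 features are made genuinely prognostic so the selection step has something to find.

```python
import numpy as np
from scipy.stats import ranksums
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(4)

# Synthetic stand-ins for 615 histomorphometric features per TMA spot,
# with the cohort sizes from the abstract (150 long-term, 126 short-term).
n_long, n_short, n_feats = 150, 126, 615
X = rng.normal(size=(n_long + n_short, n_feats))
X[:n_long, :20] += 0.8          # make the first 20 features prognostic
y = np.array([1] * n_long + [0] * n_short)

# Rank features by Wilcoxon rank-sum p-value and keep the 15 most prognostic.
pvals = np.array([ranksums(X[y == 1, j], X[y == 0, j]).pvalue
                  for j in range(n_feats)])
top15 = np.argsort(pvals)[:15]

# Combine the selected features with a linear discriminant classifier and
# read out a probability of long-term vs. short-term survival.
lda = LinearDiscriminantAnalysis().fit(X[:, top15], y)
prob_long_term = lda.predict_proba(X[:, top15])[:, 1]
```

In the paper the selection and the classifier are fit on the modeling set only and evaluated on the held-out test set; selecting features on the full cohort, as this toy does, would leak information in a real study.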


Subjects
Breast Neoplasms/pathology , Carcinoma, Ductal, Breast/pathology , Cell Nucleus Shape , Adult , Aged , Breast Neoplasms/mortality , Carcinoma, Ductal, Breast/mortality , Connecticut/epidemiology , Eosine Yellowish-(YS) , Female , Hematoxylin , Humans , Machine Learning , Middle Aged , Retrospective Studies
8.
Cytometry A ; 91(6): 566-573, 2017 06.
Article in English | MEDLINE | ID: mdl-28192639

ABSTRACT

The treatment and management of early-stage estrogen receptor-positive (ER+) breast cancer is hindered by the difficulty in identifying patients who require adjuvant chemotherapy in contrast to those who will respond to hormonal therapy. To distinguish between the more and less aggressive breast tumors, which is a fundamental criterion for the selection of an appropriate treatment plan, Oncotype DX (ODX) and other gene expression tests are typically employed. While informative, these gene expression tests are expensive, tissue destructive, and require specialized facilities. Bloom-Richardson (BR) grade, the common scheme employed in breast cancer grading, has been shown to be correlated with the Oncotype DX risk score. Unfortunately, studies have also shown that BR grade determination experiences notable inter-observer variability. One of the constituent categories in BR grading is the mitotic index. The goal of this study was to develop a deep learning (DL) classifier to identify mitotic figures from whole slide images of ER+ breast cancer, the hypothesis being that the number of mitoses identified by the DL classifier would correlate with the corresponding Oncotype DX risk categories. The mitosis detector yielded an average F-score of 0.556 on the AMIDA mitosis dataset using a 6-fold validation setup. For a cohort of 174 whole slide images with early-stage ER+ breast cancer for which the corresponding Oncotype DX score was available, the distributions of the number of mitoses identified by the DL classifier were found to be significantly different between the high and low Oncotype DX risk groups (P < 0.01). Comparisons of other risk groups, using both the ODX score and the histological grade, also presented significantly different automated mitosis distributions. Additionally, a support vector machine classifier trained to separate low/high Oncotype DX risk categories using the mitotic count determined by the DL classifier yielded an 83.19% classification accuracy. © 2017 International Society for Advancement of Cytometry.
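The final step above, separating low from high ODX risk with an SVM on the mitotic count alone, reduces to a one-feature classifier. The Poisson counts below are invented stand-ins for the detector's per-slide output; the class rates and cohort sizes are arbitrary.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)

# Hypothetical per-slide mitotic counts from a detector: high Oncotype DX
# risk slides tend to contain more mitoses than low-risk slides.
low_risk = rng.poisson(lam=3, size=80)
high_risk = rng.poisson(lam=12, size=80)
X = np.concatenate([low_risk, high_risk]).reshape(-1, 1).astype(float)
y = np.array([0] * 80 + [1] * 80)

# A linear SVM on the single mitotic-count feature amounts to learning
# a count threshold separating the two risk categories.
clf = SVC(kernel="linear").fit(X, y)
accuracy = clf.score(X, y)
```

With one feature, the SVM decision boundary is just a threshold on the count, which is why a well-separated pair of count distributions yields high accuracy.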


Subjects
Biomarkers, Tumor/genetics , Breast Neoplasms/diagnosis , Image Interpretation, Computer-Assisted/methods , Mitosis , Receptor, ErbB-2/genetics , Support Vector Machine , Breast Neoplasms/genetics , Breast Neoplasms/pathology , Eosine Yellowish-(YS) , Female , Gene Expression , Hematoxylin , Histocytochemistry/methods , Humans , Mitotic Index , Neoplasm Grading , Risk
9.
Sci Rep ; 6: 32706, 2016 09 07.
Article in English | MEDLINE | ID: mdl-27599752

ABSTRACT

Early-stage estrogen receptor-positive (ER+) breast cancer (BCa) treatment is based on the presumed aggressiveness and likelihood of cancer recurrence. Oncotype DX (ODX) and other gene expression tests have allowed for distinguishing the more aggressive ER+ BCa requiring adjuvant chemotherapy from the less aggressive cancers benefiting from hormonal therapy alone. However, these tests are expensive, tissue destructive, and require specialized facilities. Interestingly, BCa grade has been shown to be correlated with the ODX risk score. Unfortunately, the Bloom-Richardson (BR) grade determined by pathologists can be variable. A constituent category in BR grading is tubule formation. This study aims to develop a deep learning classifier to automatically identify tubule nuclei from whole slide images (WSI) of ER+ BCa, the hypothesis being that the ratio of tubule nuclei to the overall number of nuclei (a tubule formation indicator - TFI) correlates with the corresponding ODX risk categories. This correlation was assessed in 7513 fields extracted from 174 WSI. The results suggest that low ODX/BR cases have a larger TFI than high ODX/BR cases (p < 0.01). The low ODX/BR cases also presented a larger TFI than that obtained for the rest of the cases (p < 0.05). Finally, the high ODX/BR cases have a significantly smaller TFI than that obtained for the rest of the cases (p < 0.01).
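The TFI and the group comparison can be sketched as follows. The per-field counts are synthetic, the tubule rates (0.40 vs. 0.15) are invented, and a Mann-Whitney U test stands in for whatever rank test is behind the reported p-values, which the abstract does not specify.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(6)

def tubule_formation_indicator(n_tubule_nuclei, n_total_nuclei):
    """TFI: ratio of tubule nuclei to all detected nuclei in a field."""
    return n_tubule_nuclei / n_total_nuclei

# Hypothetical per-field nucleus counts: low ODX/BR fields form more tubules.
totals = rng.integers(200, 400, size=200)
low_tfi = np.array([tubule_formation_indicator(rng.binomial(t, 0.40), t)
                    for t in totals[:100]])
high_tfi = np.array([tubule_formation_indicator(rng.binomial(t, 0.15), t)
                     for t in totals[100:]])

# One-sided test: do low ODX/BR fields have larger TFI than high ODX/BR fields?
stat, p = mannwhitneyu(low_tfi, high_tfi, alternative="greater")
```

The three comparisons reported in the abstract (low vs. high, low vs. rest, high vs. rest) would each be one such test on the corresponding groups of fields.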


Subjects
Automation , Breast Neoplasms/metabolism , Cell Nucleus/metabolism , Receptors, Estrogen/metabolism , Breast Neoplasms/drug therapy , Breast Neoplasms/pathology , Female , Humans , Prognosis , Risk
10.
PLoS One ; 11(7): e0159088, 2016.
Article in English | MEDLINE | ID: mdl-27421116

ABSTRACT

Medical diagnostics is often a multi-attribute problem, necessitating sophisticated tools for analyzing high-dimensional biomedical data. Mining this data often runs into two crucial bottlenecks: 1) high dimensionality of the features used to represent rich biological data and 2) small amounts of labeled training data, due to the expense of consulting the highly specific medical expertise necessary to assess each study. Currently, no approach that we are aware of has attempted to use active learning (AL) in the context of dimensionality reduction approaches for improving the construction of low-dimensional representations. We present our novel methodology, AdDReSS (Adaptive Dimensionality Reduction with Semi-Supervision), to demonstrate that fewer labeled instances identified via AL in the embedding space are needed for creating a more discriminative embedding representation compared to randomly selected instances. We tested our methodology on a wide variety of domains, ranging from prostate gene expression and ovarian proteomic spectra to brain magnetic resonance imaging and breast histopathology. Across these various high-dimensional biomedical datasets, with 100+ observations each and all parameters considered, the median classification accuracy across all experiments showed AdDReSS (88.7%) to outperform SSAGE, an SSDR method using random sampling (85.5%), and Graph Embedding (81.5%). Furthermore, we found that embeddings generated via AdDReSS achieved a mean 35.95% improvement in Raghavan efficiency, a measure of learning rate, over SSAGE. Our results demonstrate the value of AdDReSS in providing low-dimensional representations of high-dimensional biomedical data while achieving higher classification rates with fewer labeled examples than without active learning.
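The active-learning loop at the heart of such a method can be sketched with plain uncertainty sampling on toy 2D data. AdDReSS itself couples AL with semi-supervised dimensionality reduction; this sketch only shows the query-selection idea, and the data, model, and budget are all invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Toy 2-class data standing in for a low-dimensional embedding of
# high-dimensional biomedical features.
X = np.vstack([rng.normal(-1, 1, size=(200, 2)),
               rng.normal(1, 1, size=(200, 2))])
y = np.array([0] * 200 + [1] * 200)

# Seed the labeled pool with 5 instances from each class.
labeled = list(rng.choice(200, size=5, replace=False)) \
    + list(200 + rng.choice(200, size=5, replace=False))
unlabeled = [i for i in range(len(X)) if i not in labeled]

# Uncertainty sampling: repeatedly query the instance the current model
# is least sure about, instead of labeling randomly chosen instances.
for _ in range(20):
    clf = LogisticRegression().fit(X[labeled], y[labeled])
    probs = clf.predict_proba(X[unlabeled])[:, 1]
    query = unlabeled[int(np.argmin(np.abs(probs - 0.5)))]
    labeled.append(query)
    unlabeled.remove(query)

final_acc = LogisticRegression().fit(X[labeled], y[labeled]).score(X, y)
```

The paper's claim is exactly the comparison this sketch invites: with the same labeling budget, queried instances near the decision boundary yield a more discriminative model than a random sample of the same size.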


Subjects
Artificial Intelligence , Diagnosis, Computer-Assisted/methods , Algorithms , Breast/pathology , Breast Neoplasms/diagnosis , Female , Gene Expression Regulation, Neoplastic , Humans , Image Processing, Computer-Assisted/methods , Male , Mitosis , Ovarian Neoplasms/diagnosis , Ovary/pathology , Pattern Recognition, Automated/methods , Prostate/pathology , Prostatic Neoplasms/diagnosis , Prostatic Neoplasms/genetics , Proteomics